
    A Programmable Display-Layer Architecture for Virtual-Reality Applications

    Two important technical objectives of virtual-reality systems are to provide compelling visuals and effective 3D user interaction. In this respect, modern virtual-reality system architectures suffer from a number of shortcomings. Reducing end-to-end latency, crosstalk, and judder is an especially difficult challenge, as each of these negatively affects visual quality or user interaction. To provide higher-quality visuals, complex scenes consisting of large models are often used. Rendering such a complex scene is time-consuming, resulting in high end-to-end latency and thereby hampering user interaction. Classic virtual-reality architectures cannot adequately address these challenges due to their inherent design principles. In particular, the tight coupling between input devices, the rendering loop, and the display system prevents these systems from addressing all of the aforementioned challenges simultaneously.

    In this thesis, a virtual-reality architecture is introduced that is based on the addition of a new logical layer: the Programmable Display Layer (PDL). The governing idea is that an extra layer is inserted between the rendering system and the display. In this way, the display can be updated at a fast rate and in a custom manner, independent of the other components in the architecture, including the rendering system. To generate intermediate display updates at a fast rate, the PDL performs per-pixel depth-image warping using the application data. Image warping is the process of computing a new image by transforming individual depth-pixels from a closely matching previous image to their updated locations. The PDL architecture can be used for a range of algorithms and to solve problems that are not easily solved using classic architectures. In particular, techniques to reduce crosstalk, judder, and latency are examined using algorithms implemented on top of the PDL.

    Concerning user interaction techniques, several six-degrees-of-freedom input methods exist, of which optical tracking is a popular option. However, optical tracking methods also introduce several constraints that depend on the camera setup, such as line-of-sight requirements, the volume of the interaction space, and the achieved tracking accuracy. These constraints generally cause a decline in the effectiveness of user interaction. To investigate the effectiveness of optical tracking methods, an optical tracker simulation framework has been developed, including a novel optical tracker to test the framework. In this way, different optical tracking algorithms can be simulated and quantitatively evaluated under a wide range of conditions.

    A common approach in virtual reality is to implement an algorithm and then evaluate its efficacy by either subjective, qualitative metrics or quantitative user experiments, after which an updated version of the algorithm may be implemented and the cycle repeated. A different approach is followed here: throughout this thesis, errors are detected and quantified using completely objective, automated quantitative methods, after which an attempt is made to resolve these errors dynamically.
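    To make the warping step concrete, the sketch below forward-warps one color-plus-depth image to a new viewpoint. It is a minimal NumPy rendition under assumed pinhole-camera conventions; the function name, matrix layout, and z-buffer collision handling are illustrative assumptions, not the PDL implementation.

        import numpy as np

        def warp_depth_image(color, depth, K, pose_src, pose_dst):
            """Forward-warp a color+depth image from a source to a destination view.

            color:    (H, W, 3) color image of the previous frame
            depth:    (H, W) per-pixel depth in source-camera space
            K:        (3, 3) camera intrinsics
            pose_src: (4, 4) camera-to-world matrix of the previous frame
            pose_dst: (4, 4) camera-to-world matrix of the new (predicted) frame
            """
            H, W = depth.shape
            u, v = np.meshgrid(np.arange(W), np.arange(H))

            # Unproject every pixel to source-camera space, then to world space.
            pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
            cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
            world = pose_src @ np.vstack([cam, np.ones((1, cam.shape[1]))])

            # Reproject into the destination view.
            proj = K @ (np.linalg.inv(pose_dst) @ world)[:3]
            z = proj[2]
            with np.errstate(divide="ignore", invalid="ignore"):
                u2 = np.round(proj[0] / z).astype(int)
                v2 = np.round(proj[1] / z).astype(int)

            # Scatter pixels into the new image, resolving collisions by depth.
            out = np.zeros_like(color)
            zbuf = np.full((H, W), np.inf)
            ok = (z > 0) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
            for su, sv, du, dv, dz in zip(u.ravel()[ok], v.ravel()[ok],
                                          u2[ok], v2[ok], z[ok]):
                if dz < zbuf[dv, du]:
                    zbuf[dv, du] = dz
                    out[dv, du] = color[sv, su]
            return out

    Note that forward warping leaves holes where previously occluded geometry becomes visible; handling such disocclusions is a separate concern from the reprojection shown here.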

    A Framework for Performance Evaluation of Model-Based Optical Trackers

    We describe a software framework to evaluate the performance of model-based optical trackers in virtual environments. The framework can be used to evaluate and compare the performance of different trackers under various conditions, to study the effects of varying intrinsic and extrinsic camera properties, and to study the effects of environmental conditions on tracker performance. The framework consists of a simulator that, given various input conditions, generates a series of images. The input conditions model important aspects such as the interaction task, input device geometry, camera properties, and occlusion. As a concrete case, we illustrate the use of the proposed framework for input device tracking in a near-field desktop virtual environment. We compare the performance of an in-house tracker with the ARToolkit tracker under a fixed set of conditions. We also show how the framework can be used to find the optimal camera parameters given a pre-recorded interaction task. Finally, we use the framework to determine the minimum required camera resolution for desktop, Workbench, and CAVE environments. The framework is shown to provide an efficient and simple method to study the various conditions affecting optical tracker performance. Furthermore, it can be used as a valuable development tool to aid in the construction of optical trackers.
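    The abstract suggests a straightforward evaluation loop: render synthetic camera images of a known device pose, run the tracker under test, and compare its estimate against the ground truth. The sketch below outlines that loop in Python; the Simulator and Tracker interfaces, the Conditions fields, and the reported metrics are hypothetical stand-ins, since the framework's actual API is not given in the abstract.

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class Pose:
            position: np.ndarray       # 3D position of the tracked device
            orientation: np.ndarray    # e.g. a quaternion; unused in this metric

        @dataclass
        class Conditions:
            resolution: tuple          # simulated camera resolution, e.g. (640, 480)
            camera_poses: list         # extrinsics of each simulated camera
            occlusion_level: float     # fraction of the device that is occluded

        def evaluate_tracker(tracker, simulator, trajectory, conditions):
            """Replay a pre-recorded interaction task through the simulator and
            report the tracker's positional error against ground truth."""
            errors = []
            lost = 0
            for true_pose in trajectory:                     # ground-truth poses
                images = simulator.render(true_pose, conditions)
                est = tracker.estimate(images)               # Pose, or None if lost
                if est is None:
                    lost += 1                                # e.g. line-of-sight loss
                else:
                    errors.append(np.linalg.norm(est.position - true_pose.position))
            return {
                "mean_error": float(np.mean(errors)) if errors else float("nan"),
                "loss_rate": lost / len(trajectory),
            }

    Sweeping a single condition while holding the task fixed gives the kind of study the abstract describes, such as finding the minimum camera resolution at which the mean error stays within a target bound.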

    Three Extensions to Subtractive Crosstalk Reduction

    Stereo displays suffer from crosstalk, an effect that reduces or even inhibits the viewer's ability to correctly perceive depth. Previous work on software crosstalk reduction focused on the preprocessing of static scenes viewed from a fixed viewpoint. In virtual environments, however, scenes are dynamic and are viewed in real time from varying viewpoints on large display areas. In this paper, three methods are introduced for reducing crosstalk in virtual environments. First, a non-uniform crosstalk model is described, which can be used to accurately reduce crosstalk on large display areas. Second, a novel temporal algorithm addresses the problems that occur when reducing crosstalk in dynamic scenes; in this way, high-frequency jitter caused by the erroneous assumption of static scenes can be eliminated. Finally, a perception-based metric is developed that allows us to quantify crosstalk. We provide a detailed description of the methods, discuss their tradeoffs, and compare their performance with existing crosstalk reduction methods.
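    Subtractive crosstalk reduction, the technique these extensions build on, can be stated compactly: predict how much of each eye's image will leak into the other eye and subtract that prediction before display. The sketch below shows the core idea, with a per-pixel leakage map standing in for the paper's non-uniform crosstalk model; the array shapes and coefficient semantics are assumptions.

        import numpy as np

        def reduce_crosstalk(left, right, leak_map):
            """Subtractive crosstalk reduction for one stereo pair.

            left, right: (H, W, 3) intended eye images in linear light, in [0, 1]
            leak_map:    (H, W, 1) per-pixel leakage fraction; spatially varying
                         to model non-uniform crosstalk across a large display
            """
            # The display will additively leak leak_map * (other eye) into each
            # eye, so subtract that prediction up front. Clamping at zero marks
            # a known limitation of the subtractive approach: pixels darker than
            # the incoming leakage cannot be fully corrected.
            out_left = np.clip(left - leak_map * right, 0.0, 1.0)
            out_right = np.clip(right - leak_map * left, 0.0, 1.0)
            return out_left, out_right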

    The design and implementation of a VR-architecture for smooth motion

    We introduce an architecture for smooth motion in virtual environments. The system performs forward depth-image warping to produce images at video refresh rates. In addition to color and depth, our 3D warping approach records per-pixel motion information during rendering of the three-dimensional scene. These enhanced depth images are used to perform per-pixel advection, which accounts for both object motion and view changes. Our dual graphics card architecture renders the 3D scene at the highest possible frame rate on one graphics card, while performing the depth-image warping on a second graphics card at the video refresh rate. This architecture allows us to compensate for the visual artifact known as motion judder, which arises when the rendering frame rate is lower than the video refresh rate. The evaluation of our method shows that motion judder can be effectively removed.
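    To make the advection step concrete, the sketch below extrapolates an intermediate frame from the last rendered frame by moving each pixel along its recorded motion vector, resolving overlaps with the recorded depth. It is a minimal illustration under assumed conventions (screen-space motion in pixels per rendered frame), not the dual-GPU implementation described above.

        import numpy as np

        def advect_pixels(color, depth, motion, t):
            """Extrapolate an intermediate frame by moving each pixel along its
            recorded motion vector.

            color:  (H, W, 3) last rendered frame
            depth:  (H, W) per-pixel depth, used to resolve overlaps
            motion: (H, W, 2) screen-space motion per rendered frame, which
                    already combines object motion and view changes
            t:      fraction of a rendering interval elapsed since the last render
            """
            H, W, _ = color.shape
            v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
            du = np.round(u + t * motion[..., 0]).astype(int)
            dv = np.round(v + t * motion[..., 1]).astype(int)

            # Scatter each source pixel to its advected position; when two pixels
            # land on the same target, the nearer one (smaller depth) wins.
            out = np.zeros_like(color)
            zbuf = np.full((H, W), np.inf)
            ok = (du >= 0) & (du < W) & (dv >= 0) & (dv < H)
            for su, sv in zip(u[ok], v[ok]):
                tu, tv = du[sv, su], dv[sv, su]
                if depth[sv, su] < zbuf[tv, tu]:
                    zbuf[tv, tu] = depth[sv, su]
                    out[tv, tu] = color[sv, su]
            return out

        # For example, with rendering at 30 Hz and a 120 Hz display, the warper
        # would produce frames at t = 0.25, 0.5 and 0.75 between two renders.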